Feature selection is an important process in machine learning: by selecting the features that contribute most to the prediction target, it builds interpretable and robust models. However, most mature feature selection algorithms, whether supervised or semi-supervised, fail to fully exploit the complex latent structure among features. We argue that these structures are crucial to the feature selection process, especially when labels are scarce and the data are noisy. To this end, we introduce a deep self-supervised mechanism into the feature selection problem, named batch-Attention-based Self-supervised Feature Selection (A-SFS). First, a multi-task self-supervised autoencoder is designed to uncover the hidden structure among features with the support of two pretext tasks. Guided by the integrated information from the multi-self-supervised learning model, a batch-attention mechanism is designed to generate feature weights according to batch-based feature selection patterns, mitigating the influence introduced by a few noisy samples. A-SFS is compared with 14 major strong benchmarks, including LightGBM and XGBoost. Experimental results show that A-SFS achieves the highest accuracy on most datasets. Furthermore, this design significantly reduces the dependence on labels: only 1/10 of the labeled data is needed to reach the same performance as those state-of-the-art baselines. The results also show that A-SFS is the most robust to noisy and missing data.
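The batch-attention idea can be illustrated with a minimal sketch. This is not the paper's implementation: the feature scores here are plain batch-averaged magnitudes standing in for A-SFS's learned attention scores, and `temperature` is a hypothetical parameter; the point is only that averaging over a batch before the softmax dilutes the influence of a few noisy rows.

```python
import math

def batch_attention_weights(batch, temperature=1.0):
    """Toy sketch: score each feature by its mean absolute value over a
    batch, then softmax the scores into weights summing to 1. Batch-wise
    averaging limits how much any single noisy sample can move a weight."""
    n_features = len(batch[0])
    scores = [sum(abs(row[j]) for row in batch) / len(batch)
              for j in range(n_features)]
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Feature 0 is consistently informative, feature 1 is near-silent.
weights = batch_attention_weights([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]])
```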
With recent advances in deep convolutional neural networks, significant progress has been made in general face recognition. However, state-of-the-art general face recognition models do not generalize well to occluded face images, which are common in real-world scenarios. The potential reasons are the lack of large-scale occluded face data for training and of designs specifically built to handle the features corrupted by occlusions. This paper presents a novel face recognition method that is robust to occlusions, based on a single end-to-end deep neural network. Our method, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features in deep convolutional neural networks and clean them with dynamically learned masks. In addition, we construct large-scale occluded face images to train FROM effectively and efficiently. Compared with existing methods that rely on external detectors to discover occlusions or adopt shallower, less discriminative models, FROM is simple yet powerful. Experimental results on LFW, MegaFace Challenge 1, RMF2, the AR dataset, and other simulated occluded/masked datasets confirm that FROM dramatically improves accuracy under occlusion and generalizes well to general face recognition.
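The mask-and-clean step can be sketched in a few lines. This is a toy stand-in, not FROM itself: in the paper the mask is predicted by a learned decoder over convolutional feature maps, whereas here the mask logits are simply given as input.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def clean_features(features, mask_logits):
    """Gate each feature element by a mask value in (0, 1): elements the
    mask judges occlusion-corrupted (large negative logit) are suppressed,
    clean elements (large positive logit) pass through unchanged."""
    mask = [sigmoid(z) for z in mask_logits]
    return [f * m for f, m in zip(features, mask)]

# The second feature is judged corrupted and driven toward zero.
cleaned = clean_features([2.0, -1.0, 3.0], [10.0, -10.0, 0.0])
```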
With the recent success of deep neural networks, remarkable progress has been achieved in face recognition. However, collecting large-scale real-world training data for face recognition has been challenging, especially due to label noise and privacy concerns. Meanwhile, existing face recognition datasets, usually collected from web images, lack detailed annotations of attributes (e.g., pose and expression), so the influence of different attributes on face recognition has been poorly investigated. In this paper, we address the above problems in face recognition using synthetic face images, i.e., SynFace. Specifically, we first explore the performance gap between recent state-of-the-art face recognition models trained with synthetic and real face images. We then analyze the underlying causes behind the performance gap, e.g., the poor intra-class variation and the domain gap between synthetic and real face images. Inspired by this, we devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the above performance gaps, demonstrating the great potential of synthetic data for face recognition. Furthermore, with the controllable face synthesis model, we can easily manage the different factors of synthetic face generation, including pose, expression, illumination, the number of identities, and samples per identity. Therefore, we also perform a systematic empirical analysis on synthetic face images to provide some insights into how to effectively utilize synthetic data for face recognition.
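Both identity mixup and domain mixup reduce, at their core, to convex combinations of two inputs. The sketch below shows only that shared operation on plain vectors; what is mixed in SynFace (identity coefficients of the synthesis model for IM, synthetic and real images for DM) and how the coefficient is sampled are specific to the paper and not reproduced here.

```python
def mixup(x1, x2, lam):
    """Convex combination of two vectors with coefficient lam in [0, 1]."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

# Blend two toy 'identity' vectors 70/30.
mixed = mixup([1.0, 0.0], [0.0, 1.0], 0.7)
```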
Accurate determination of a small molecule candidate (ligand) binding pose in its target protein pocket is important for computer-aided drug discovery. Typical rigid-body docking methods ignore the pocket flexibility of protein, while the more accurate pose generation using molecular dynamics is hindered by slow protein dynamics. We develop a tiered tensor transform (3T) algorithm to rapidly generate diverse protein-ligand complex conformations for both pose and affinity estimation in drug screening, requiring neither machine learning training nor lengthy dynamics computation, while maintaining both coarse-grain-like coordinated protein dynamics and atomistic-level details of the complex pocket. The 3T conformation structures we generate are closer to experimental co-crystal structures than those generated by docking software, and more importantly achieve significantly higher accuracy in active ligand classification than traditional ensemble docking using hundreds of experimental protein conformations. 3T structure transformation is decoupled from the system physics, making future usage in other computational scientific domains possible.
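The tiered idea, progressively smaller transformations applied to progressively smaller subsets of coordinates, can be caricatured in 2D. This is an illustrative reduction, not the 3T algorithm: real 3T operates on 3D protein-ligand coordinates with its own tier construction, while here a tier is just a hypothetical (subset, angle, center) triple.

```python
import math

def rotate2d(points, theta, center):
    """Rigidly rotate 2D points about a center."""
    cx, cy = center
    c, s = math.cos(theta), math.sin(theta)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]

def tiered_transform(points, tiers):
    """Apply a sequence of tiers, each rotating a subset of the points.
    A large first tier mimics coordinated global motion; later, smaller
    tiers adjust local detail while inheriting the earlier motion."""
    pts = list(points)
    for subset, theta, center in tiers:
        moved = rotate2d([pts[i] for i in subset], theta, center)
        for i, p in zip(subset, moved):
            pts[i] = p
    return pts

# Tier 1 rotates everything 90 degrees; tier 2 nudges only point 1.
pts = tiered_transform([(1.0, 0.0), (2.0, 0.0)],
                       [([0, 1], math.pi / 2, (0.0, 0.0)),
                        ([1], 0.1, (0.0, 1.0))])
```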
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
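The greedy policy is easy to state in code once a scorer is available. In the sketch below `cmi_oracle` is a stand-in for the paper's amortized network (or for a true conditional-mutual-information oracle, which is usually unavailable); how that scorer is learned is omitted entirely.

```python
def greedy_dynamic_selection(candidates, cmi_oracle, budget):
    """Sequentially query the feature whose estimated conditional mutual
    information with the response, given the features observed so far,
    is highest."""
    observed = []
    remaining = set(candidates)
    for _ in range(budget):
        best = max(remaining, key=lambda f: cmi_oracle(f, tuple(observed)))
        observed.append(best)
        remaining.remove(best)
    return observed

# Toy scorer: fixed per-feature scores that ignore the observed set.
scores = {"age": 0.9, "bmi": 0.5, "zip": 0.1}
chosen = greedy_dynamic_selection(scores, lambda f, obs: scores[f], 2)
```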
Deep neural networks are vulnerable to adversarial attacks. In this paper, we take the role of investigators who want to trace the attack and identify the source, that is, the particular model from which the adversarial examples were generated. The techniques derived would aid forensic investigation of attack incidents and serve as a deterrent to potential attacks. We consider the buyer-seller setting where a machine learning model is to be distributed to various buyers and each buyer receives a slightly different copy with the same functionality. A malicious buyer generates adversarial examples from a particular copy $\mathcal{M}_i$ and uses them to attack other copies. From these adversarial examples, the investigator wants to identify the source $\mathcal{M}_i$. To address this problem, we propose a two-stage separate-and-trace framework. The model separation stage generates multiple copies of a model for the same classification task. This process injects unique characteristics into each copy so that the adversarial examples generated have distinct and traceable features. We give a parallel structure which embeds a ``tracer'' in each copy, and a noise-sensitive training loss to achieve this goal. The tracing stage takes in adversarial examples and a few candidate models, and identifies the likely source. Based on the unique features induced by the noise-sensitive loss function, we can effectively trace the potential adversarial copy by considering the output logits from each tracer. Empirical results show that it is possible to trace the origin of the adversarial example and the mechanism can be applied to a wide range of architectures and datasets.
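The tracing stage can be caricatured as a scoring-and-argmax procedure. The sketch below assumes each candidate copy exposes a scalar "tracer score" per example; in the paper this score comes from the tracer's output logits shaped by the noise-sensitive training, which is not modeled here.

```python
def trace_source(adv_examples, tracer_scores):
    """Sum each candidate copy's tracer score over the adversarial
    examples and report the copy with the highest total as the likely
    source."""
    totals = {name: sum(score(x) for x in adv_examples)
              for name, score in tracer_scores.items()}
    return max(totals, key=totals.get)

# Toy tracers: each reads one coordinate of a 2-D 'example'.
tracers = {"M1": lambda x: x[0], "M2": lambda x: x[1]}
source = trace_source([(0.9, 0.1), (0.8, 0.2)], tracers)
```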
Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos. Existing VSR techniques usually recover HR frames by extracting pertinent textures from nearby frames with known degradation processes. Despite significant progress, grand challenges remain in effectively extracting and transmitting high-quality textures from severely degraded low-quality sequences affected by blur, additive noise, and compression artifacts. In this work, a novel Frequency-Transformer (FTVSR) is proposed for handling low-quality videos, which carries out self-attention in a combined space-time-frequency domain. First, video frames are split into patches and each patch is transformed into spectral maps in which each channel represents a frequency band. This permits fine-grained self-attention on each frequency band, so that real visual texture can be distinguished from artifacts. Second, a novel dual frequency attention (DFA) mechanism is proposed to capture both global and local frequency relations, which can handle the various complicated degradation processes in real-world scenarios. Third, we explore different self-attention schemes for video processing in the frequency domain and discover that a ``divided attention'', which conducts a joint space-frequency attention before applying temporal-frequency attention, leads to the best video enhancement quality. Extensive experiments on three widely-used VSR datasets show that FTVSR outperforms state-of-the-art methods on different low-quality videos with clear visual margins. Code and pre-trained models are available at https://github.com/researchmm/FTVSR.
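The patch-to-spectrum step corresponds to a 2D transform such as the DCT; a brute-force, unnormalized DCT-II on a small patch is sketched below. FTVSR's actual tokenization, and the attention layered on top of the frequency bands, are not shown.

```python
import math

def dct2(patch):
    """Unnormalized 2D DCT-II of an n x n patch: coefficient (u, v)
    indexes one spatial-frequency band, with (0, 0) the DC term."""
    n = len(patch)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(
                patch[x][y]
                * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                * math.cos(math.pi * (2 * y + 1) * v / (2 * n))
                for x in range(n) for y in range(n))
    return out

# A flat patch has energy only in the DC band.
spectrum = dct2([[1.0, 1.0], [1.0, 1.0]])
```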
Because existing wireless sensor network (WSN)-based anomaly detection methods consider and analyze only temporal features, this paper designs a self-supervised anomaly node detection method based on an autoencoder. The method integrates temporal WSN data-flow feature extraction, spatial position feature extraction, and intermodal WSN correlation feature extraction into the autoencoder design to make full use of the spatial and temporal information of the WSN for anomaly detection. First, a fully connected network extracts the temporal features of nodes by considering a single mode from a local spatial perspective. Second, a graph neural network (GNN) introduces the WSN topology from a global spatial perspective and extracts the spatial and temporal features of the data flows of nodes and their neighbors, again considering a single mode. Then, an adaptive fusion method based on weighted summation extracts the correlated features between the different modes. In addition, this paper introduces a gated recurrent unit (GRU) to address the long-term dependence problem along the time dimension. Finally, the reconstructed output of the decoder and the hidden-layer representation of the autoencoder are fed into a fully connected network to calculate the anomaly probability of the current system. Because the spatial feature extraction is performed first, the designed method can be applied to large-scale network anomaly detection by adding a clustering operation. Experiments show that the designed method outperforms the baselines: the F1 score reaches 90.6%, which is 5.2% higher than those of existing anomaly detection methods based on unsupervised reconstruction and prediction. Code and model are available at https://github.com/GuetYe/anomaly_detection/GLSL
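The weighted-summation fusion can be written down directly. The sketch assumes the two branch outputs are same-length vectors and that two learned logits control the weights; in the actual model the logits would themselves be produced by the network, which is omitted here.

```python
import math

def adaptive_fusion(temporal_feat, spatial_feat, logits):
    """Softmax two logits into weights and return the weighted sum of
    the temporal-branch and spatial-branch feature vectors."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    w = [e / total for e in exps]
    return [w[0] * a + w[1] * b
            for a, b in zip(temporal_feat, spatial_feat)]

# Equal logits -> a plain average of the two branches.
fused = adaptive_fusion([2.0, 0.0], [0.0, 2.0], [0.0, 0.0])
```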
With the increase in health consciousness, noninvasive body monitoring has attracted growing interest among researchers. Heart rate (HR) is one of the most important pieces of physiological information, and in recent years researchers have estimated it remotely from facial videos. Although progress has been made over the past few years, limitations remain, such as processing time that increases with accuracy and the lack of comprehensive, challenging datasets for use and comparison. Recently, it was shown that HR information can be extracted from facial videos by spatial decomposition and temporal filtering. Inspired by this, a new framework is introduced in this paper to remotely estimate the HR under realistic conditions by combining spatial and temporal filtering with a convolutional neural network. Our proposed approach shows better performance than the benchmark on the MMSE-HR dataset in terms of both average HR estimation and short-time HR estimation, and its short-time HR estimates are highly consistent with the ground truth.
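The temporal-filtering intuition can be sketched as a band-limited spectral peak search: restrict attention to the physiologically plausible HR band and take the strongest frequency. This brute-force DFT is only an illustration; the paper's pipeline combines spatial decomposition, temporal filtering, and a CNN, none of which appears here, and the band limits and step size below are arbitrary choices.

```python
import math

def estimate_hr_bpm(signal, fs, lo=0.7, hi=4.0, step=0.05):
    """Scan the 0.7-4.0 Hz band (42-240 bpm) with a brute-force DFT and
    return 60x the frequency with the largest power."""
    best_f, best_p = lo, -1.0
    f = lo
    while f <= hi:
        re = sum(s * math.cos(2 * math.pi * f * k / fs)
                 for k, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * k / fs)
                 for k, s in enumerate(signal))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
        f += step
    return best_f * 60.0

# A clean 1.2 Hz pulse sampled at 30 fps should read about 72 bpm.
fs = 30.0
hr = estimate_hr_bpm([math.sin(2 * math.pi * 1.2 * k / fs)
                      for k in range(300)], fs)
```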
Currently, most deep learning methods cannot cope with the scarcity of industrial product defect samples and the significant differences in their characteristics. This paper proposes an unsupervised defect detection algorithm based on a reconstruction network, realized using only a large number of easily obtained defect-free samples. The network comprises two parts: image reconstruction and surface defect area detection. The reconstruction network is designed as a fully convolutional autoencoder with a lightweight structure. Only a small number of normal samples are used for training, so that the reconstruction network generates a defect-free reconstructed image. A loss function combining structural loss and $\mathit{L}1$ loss is proposed for the reconstruction network to address the poor detection of irregular textured surface defects. Further, the residual between the reconstructed image and the image under test marks the possible defect region, and conventional image operations then localize the defect. The proposed unsupervised defect detection algorithm is evaluated on multiple defect image sample sets. Compared with other similar algorithms, the results show that it achieves strong robustness and accuracy.
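The localization step, residual thresholding, is the simplest part and can be sketched directly; the reconstruction network itself and the structural-plus-L1 training loss are omitted, and the threshold here is a hypothetical constant rather than anything tuned.

```python
def defect_mask(test_img, recon_img, threshold):
    """Per-pixel absolute residual between the image under test and its
    defect-free reconstruction, thresholded into a binary defect map."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_t, row_r)]
            for row_t, row_r in zip(test_img, recon_img)]

# Pixel 0 deviates strongly from the reconstruction -> flagged.
mask = defect_mask([[0.9, 0.1]], [[0.1, 0.1]], 0.5)
```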